-----Part 1 (part 2 appended)
Archive-name: compression-faq/part1
Last-modified: June 4th, 1992
[*** Short notice: see new question 9 about the WEB 16:1 compressor. ***]
This file is part 1 of a set of Frequently Asked Questions (FAQ) for the
group comp.compression. Certain questions get asked time and again,
and this is an attempt to reduce the bandwidth taken up by these posts
and their associated replies. If you have a question, *please* check
this file before you post. It may save a lot of people's time.
If you have not already read the overall Usenet introductory material
posted to "news.announce.newusers", please do.
If you don't want to see this FAQ twice a month, please add the
subject line to your kill file. If you have corrections or suggestions
for this FAQ, send them to Jean-loup Gailly <jloup@chorus.fr>.
Thank you.
Part 1 is oriented towards practical usage of compression programs.
Part 2 is more intended for people who want to know how compression works.
Main changes relative to the previous version:
- added arc, desea and zoo for Macintosh (question 2)
- used binhex4.0.bin instead of stuffit-151.hqx to avoid chicken and egg
problem (question 2)
- added question 9: "The WEB 16:1 compressor"
- added the address of Iterated Systems (question 10)
- added information from comp.dsp faq about audio compression (question 14)
- added question 17: "I need source for arithmetic coding"
Contents
========
General questions:
[1] What is this newsgroup about?
[2] What is this .xxx file type?
Where can I find the corresponding compression program?
[3] Where can I get image compression programs?
[4] What is an archiver?
[5] What is the best general purpose compression program?
[6] What is the state of the art in lossless image compression?
[7] Which books should I read?
[8] What about patents on data compression algorithms?
[9] The WEB 16:1 compressor.
[10] What is the state of fractal compression?
[11] What is the V.42bis standard?
[12] I need specs and source for TIFF and CCITT group 4 Fax.
[13] What is JPEG?
[14] Are there algorithms and standards for audio compression?
[15] I need source for the winners of the Dr Dobbs compression contest
[16] I am looking for source of an H.261 codec.
[17] I need source for arithmetic coding
Common problems:
[30] My archive is corrupted!
[31] pkunzip reports a CRC error!
[32] VMS zip is not compatible with pkzip!
Questions which do not really belong to comp.compression:
[50] What is this 'tar' compression program?
[51] I need a CRC algorithm
[52] What about those people who continue to ask frequently asked questions?
[53] Where are FAQ lists archived?
[54] I need specs for graphics formats
[55] Where can I find Lenna and other images?
(Long) introductions to data compression techniques (in part 2)
[70] Introduction to data compression (long)
Huffman and Related Compression Techniques
Arithmetic Coding
Substitutional Compressors
The LZ78 family of compressors
The LZ77 family of compressors
[71] Introduction to MPEG (long)
What is MPEG?
Does it have anything to do with JPEG?
Then what's JBIG and MHEG?
What has MPEG accomplished?
So how does MPEG I work?
What about the audio compression?
So how much does it compress?
What's phase II?
When will all this be finished?
How do I join MPEG?
How do I get the documents, like the MPEG I draft?
[72] What is wavelet theory?
[73] What is the theoretical compression limit?
[74] Introduction to JBIG
[99] Acknowledgments
Search for "Subject: [#]" to get to question number # quickly. Some news
readers can also take advantage of the message digest format used here.
If you know very little about data compression, read question 70 in
part 2 first.
------------------------------------------------------------------------------
~Subject: [1] What is this newsgroup about?
comp.compression is the place to discuss data compression,
both lossless (for text or data) and lossy (for images, sound, etc.).
If you only want to find a particular compression program for a
particular operating system, please read first this FAQ and the
article "How to find sources" which is regularly posted in
news.answers.
If you can't resist posting, other groups are probably more appropriate
(comp.binaries.ibm.pc.wanted, comp.sources.wanted, comp.sys.mac.wanted,
alt.graphics.pixutils). Please post your request in comp.compression only
as a last resort.
Please do not post any program in binary form to comp.compression.
Very short sources can be posted, but long sources should be posted
to the specialized source groups, such as comp.sources.* or alt.sources.
------------------------------------------------------------------------------
~Subject: [2] What is this .xxx file type?
Where can I find the corresponding compression program?
For most programs, one US and one European ftp site are given.
(wuarchive.wustl.edu: 128.252.135.4, garbo.uwasa.fi: 128.214.87.1)
Many other sites (in particular wsmr-simtel20.army.mil: 192.88.110.2)
have the same programs.
To keep this list to a reasonable size, many programs are not
mentioned here. Additional information can be found in the file
ux1.cso.uiuc.edu:/doc/pcnet/compression [128.174.5.59] maintained by
David Lemson (lemson@uiuc.edu). When several programs can handle
the same archive format, only one of them is given.
For Macintosh programs, look on sumex-aim.stanford.edu:/info-mac [36.44.0.6].
For VM/CMS, look on vmd.cso.uiuc.edu:/public.477 [128.174.5.98].
For Atari, look on terminator.cc.umich.edu:/atari/archivers [141.211.164.8]
For Amiga, look on ab20.larc.nasa.gov:/amiga/utils/archivers [128.155.23.64]
If you don't know how to use ftp or don't have ftp access, read the
article "How to find sources" which is regularly posted in news.answers.
If you can't find a program given below, it is likely that a newer
version exists in the same directory. (Tell me <jloup@chorus.fr>)
ext: produced by or read by
.arc: arc, pkarc (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/starter/pk361.exe
garbo.uwasa.fi:/pc/arcers/pk361.exe
arc (Unix)
wuarchive.wustl.edu:/mirrors/misc/unix/arc521e.tar-z
garbo.uwasa.fi:/unix/arcers/arc.tar.Z
Contact: Howard Chu <hyc@umix.cc.umich.edu>
arc (VMS)
wuarchive.wustl.edu:/packages/compression/vax-vms/arc.exe
arcmac (Mac)
mac.archive.umich.edu:/mac/utilities/compressionapps/arcmac.hqx
.arj: arj (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/arc-lbr/arj230.exe
garbo.uwasa.fi:/pc/arcers/arj230ng.exe
unarj (Unix). There is *no* arj for Unix. Don't post a request.
wuarchive.wustl.edu:/mirrors/misc/unix/unarj230.tar-z
garbo.uwasa.fi:/unix/arcers/unarj221.tar.Z
Contact: Robert K Jung <robjung@world.std.com>
.cpt: Compact Pro (Macintosh)
sumex-aim.stanford.edu:/info-mac/util/compact-pro-132.hqx [36.44.0.6]
.gif: gif files are images compressed with the LZW algorithm. See the
comp.graphics FAQ list for programs manipulating .gif files.
.hqx: Macintosh BinHex format
for Mac:
mac.archive.umich.edu:/mac/utilities/compressionapps/binhex4.0.bin
for Unix:
sumex-aim.stanford.edu:/info-mac/unix/mcvert-165.shar [36.44.0.6]
.lzh: lha (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/arc-lbr/lha213.exe
garbo.uwasa.fi:/pc/arcers/lha213.exe
lharc (Unix). Warning: lharc can extract .lzh files created by
lharc 1.xx but not those created by lha. See lha for Unix below.
wuarchive.wustl.edu:/mirrors/misc/unix/lharc102a.tar-z
garbo.uwasa.fi:/unix/arcers/lharcsrc.zoo
lha (Unix) Warning: all doc is in Japanese.
garbo.uwasa.fi:/unix/arcers/lha-1.00.tar.Z
ftp.kuis.kyoto-u.ac.jp:/utils/lha-1.00.tar.Z
Contact: lha-admin@oki.co.jp
lha (Mac)
mac.archive.umich.edu:/mac/utilities/compressionapps/maclha2.0.cpt.hqx
lharc (VMS). Same warning as for Unix lharc.
wuarchive.wustl.edu:/packages/compression/vax-vms/lharc.exe
.pak: pak (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/arc-lbr/pak251.exe
garbo.uwasa.fi:/pc/arcers/pak251.exe
.pit: PackIt (Macintosh)
for Mac:
sumex-aim.stanford.edu:/info-mac/util/stuffit-151.hqx [36.44.0.6]
for Unix:
sumex-aim.stanford.edu:/info-mac/unix/mcvert-165.shar [36.44.0.6]
.sea: self-extracting archive (Macintosh)
Run the file to extract it. You can also extract it with:
mac.archive.umich.edu:/mac/utilities/compressionapps/desea1.11.cpt.hqx
.sit: Stuffit (Macintosh)
for Mac:
sumex-aim.stanford.edu:/info-mac/util/stuffit-151.hqx [36.44.0.6]
for Unix:
sumex-aim.stanford.edu:/info-mac/unix/unsit-15.shar [36.44.0.6]
.tar: tar is *not* a compression program. However, to be kind to you:
for MSDOS
wuarchive.wustl.edu:/mirrors/msdos/starter/tarread.exe
garbo.uwasa.fi:/pc/unix/tar4dos.zoo
for Unix
tar (you have it already. To extract: tar xvf file.tar)
for VMS
wuarchive.wustl.edu:/packages/compression/vax-vms/tar.exe
for Macintosh
sumex-aim.stanford.edu:/info-mac/util/tar-30.hqx
.tar.Z, .tar-z, .taz: tar + compress
For Unix: zcat file.tar.Z | tar xvf -
Other OS: first uncompress (see .Z below) then untar (see .tar above)
.zip: pkzip 1.10 (MSDOS).
wuarchive.wustl.edu:/mirrors/msdos/zip/pkz110eu.exe.
garbo.uwasa.fi:/pc/arcers/pkz110eu.exe.
Note: pkz110eu.exe is an 'export' version without encryption.
ux1.cso.uiuc.edu:/pc/exec-pc/pkz193a.exe [128.174.5.59]
Note: pkzip 1.93a is an alpha version.
pkz201.exe is a hacked (illegal) copy of pkz193a.exe
zip 1.0 and unzip 4.2 (Unix, MSDOS, VMS, OS/2)
wuarchive.wustl.edu:/mirrors/misc/unix/zip10ex.zip
wuarchive.wustl.edu:/mirrors/misc/unix/unzip42.tar-z
wuarchive.wustl.edu:/mirrors3/garbo.uwasa.fi/arcutil/zcrypt10.zip
Non US residents must get the encryption code from garbo (see below)
garbo.uwasa.fi:/unix/arcers/zip10ex.zip
garbo.uwasa.fi:/unix/arcers/unzip42.tar.Z.
garbo.uwasa.fi:/pc/arcutil/zcrypt10.zip (encryption code)
Contact: zip-bugs@cs.ucla.edu
zip 1.0 and unzip 4.2 (Mac)
valeria.cs.ucla.edu:/info-zip/Mac/zip_uzip.hqx
sumex-aim.stanford.edu:/info-mac/util/unzip-42.hqx
.zoo: zoo 2.10 (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/zoo/zoo210.exe
garbo.uwasa.fi:/pc/arcers/zoo210.exe
zoo 2.10 (Unix, VMS)
wuarchive.wustl.edu:/mirrors/misc/unix/zoo210.tar-z
garbo.uwasa.fi:/unix/arcers/zoo210.tar.Z
zoo (Mac)
mac.archive.umich.edu:/mac/utilities/compressionapps/maczoo.sit.hqx
Contact: Rahul Dhesi <dhesi@cirrus.com>
.F: freeze (Unix)
wuarchive.wustl.edu:/usenet/comp.sources.misc/volume25/freeze/part0[1-2].Z
ftp.inria.fr:/system/arch-compr/freeze-2.3.2.tar.Z
Contact: Leonid A. Broukhis <leo@s514.ipmce.su>
.Y: yabba (Unix, VMS, ...)
wuarchive.wustl.edu:/usenet/comp.sources.unix/volume24/yabbawhap/part0[1-4].Z
ftp.inria.fr:/system/arch-compr/yabba.tar.Z
Contact: Dan Bernstein <brnstnd@nyu.edu>
.Z: compress (Unix)
It is likely that your Unix system has 'compress' already. Otherwise:
wuarchive.wustl.edu:/packages/compression/compress-4.1.tar
(not in .Z format to avoid chicken and egg problem)
compress (MSDOS)
wuarchive.wustl.edu:/mirrors/msdos/compress/comp430[ds].zip
garbo.uwasa.fi:/pc/unix/comp430d.zip
compress (Macintosh)
sumex-aim.stanford.edu:/info-mac/util/maccompress-32.hqx
------------------------------------------------------------------------------
~Subject: [3] Where can I get image compression programs?
JPEG:
Source code for most any machine:
ftp.uu.net:/graphics/jpeg/jpegsrc.v3.tar.Z [137.39.1.9]
nic.funet.fi:/pub/graphics/programs/jpeg/jpegsrc.v3.tar.Z [128.214.6.100]
Contact: jpeg-info@uunet.uu.net (Independent JPEG Group)
xv, an image viewer which can read JPEG pictures, is available in
ftp.cicb.fr:/pub/X11R5/contrib/xv-2.20.tar.Z [129.20.128.2]
epic:
whitechapel.media.mit.edu:/pub/epic.tar.Z [18.85.0.125]
The "Lenna" test image is available as part of the EPIC package,
where it is named "test_image".
compfits:
uwila.cfht.hawaii.edu:/pub/compfits/compfits.tar.Z [128.171.80.50]
Contact: Jim Wright <jwright@cfht.hawaii.edu>
fitspress:
128.103.40.79:/pub/fitspress08.tar.Z
tiff:
For source and sample images, see question 12 below.
------------------------------------------------------------------------------
~Subject: [4] What is an archiver?
There is a distinction between archivers and other compression
programs:
- an archiver takes several input files, compresses them and produces
a single archive file. Examples are arc, arj, lha, zip, zoo.
- other compression programs create one compressed file for each
input file. Examples are freeze, yabba, compress. Such programs
are often combined with tar to create compressed archives (see
question 50: "What is this tar compression program?").
------------------------------------------------------------------------------
~Subject: [5] What is the best general purpose compression program?
The answer is: it depends. (You did not expect a definitive answer,
did you?)
It depends whether you favor speed, compression ratio, a standard and
widely used archive format, the number of features, etc... Just as
for text editors, personal taste plays an important role. compress has
4 options, arj 2.30 has about 130 options; different people like
different programs. *Please* do not start or continue flame wars on
such matters of taste.
The only objective comparisons are speed and compression ratio. Here
is a short table comparing various programs on a 33MHz Compaq 386.
All programs have been run on Unix SVR4, except pkzip and arj which
only run on MSDOS. Detailed benchmarks have been posted in
comp.compression by Peter Gutmann <pgut1@cs.aukuni.ac.nz>.
*Please* do not post your own benchmarks made on your own files that
nobody else can access. If you think that you must absolutely post yet
another benchmark, make sure that your test files are available by
anonymous ftp.
The programs compared here were chosen because they are the most popular
or because they run on Unix and source is available. For ftp
information, see above. Two programs (hpack and comp-2) have been added
because they achieve better compression (at the expense of speed)
and one program (lzrw3-a) has been added because it favors speed at
the expense of compression:
- comp-2 is in wuarchive.wustl.edu:/mirrors/msdos/ddjmag/ddj9102.zip
(inner zip file nelson.zip),
- hpack is in wuarchive.wustl.edu:/mirrors/misc/unix/hpack75a.tar-z
and garbo.uwasa.fi:/unix/arcers/hpack75a.tar.Z
- sirius.ucs.adelaide.edu.au:/pub/compression/lzrw3-a.c [129.127.40.3]
The 14 files used in the comparison are from the standard Calgary
Text Compression Corpus, available in
fsa.cpsc.ucalgary.ca:/pub/text.compression.corpus.tar.Z [136.159.2.1]
The whole corpus includes 18 files, but the 4 files paper[3-6] are
generally omitted in benchmarks. It contains several kinds of file
(ascii, binary, image, etc...) but has a bias towards large files.
You may well get different ratings on the typical mix of files that
you use daily, so keep in mind that the comparisons given below are
only indicative.
The programs are ordered by decreasing total compressed size. For a
fair comparison between archivers and other programs, this size is
only the size of the compressed data, not the archive size.
The programs were run on an idle machine, so the elapsed time
is significant and can be used to compare Unix and MSDOS programs.
[Note: I still have to add all decompression times.]
          size   lzrw3a  compress  lharc    yabba    pkzip    freeze
version:                   4.0      1.02     1.0      1.10     2.3.2
options:                                   -m300000
         ------  ------  ------   ------   ------   ------   ------
bib      111261   49040   46528    46502    40456    41354    41515
book1    768771  416131  332056   369479   306813   350560   344793
book2    610856  274371  250759   252540   229851   232589   230861
geo      102400   84214   77777    70955    76695    76172    68626
news     377109  191291  182121   166048   168287   157326   155783
obj1      21504   12647   14048    10748    13859    10546    10453
obj2     246814  108040  128659    90848   114323    90130    85500
paper1    53161   24522   25077    21748    22453    20041    20021
paper2    82199   39479   36161    35275    32733    32867    32693
pic      513216  111000   62215    61394    65377    63805    53291
progc     39611   17919   19143    15399    17064    14164    14143
progl     71646   24358   27148    18760    23512    17255    17064
progp     49379   16801   19209    12792    16617    11877    11686
trans     93695   30292   38240    28092    31300    23135    22861
      3,141,622 1,400,105 1,259,141 1,200,580 1,159,340 1,141,821 1,109,290
real              0m35s   0m59s    5m03s    2m40s             5m09s
user              0m25s   0m29s    4m29s    1m46s             4m04s
sys               0m05s   0m10s    0m07s    0m18s             0m11s
MSDOS:                                               1m39s
          zip     zoo      lha       arj     pkzip    hpack    comp-2
          1.0     2.10  0.04 & 2.13  2.30    1.93a    0.75a
          -9      ah               -jm      -ex
         ------  ------  ------   ------   ------   ------   ------
bib       40717   40742   40740    36090    35186    35619    29840
book1    339932  339076  339074   318382   313566   306876   237380
book2    229419  228444  228442   210521   207204   208486   174085
geo       69837   68576   68574    69209    68698    58976    64590
news     154865  155086  155084   146855   144954   141608   128047
obj1      10522   10312   10310    10333    10307    10572    10819
obj2      86661   84983   84981    82052    81213    80806    85465
paper1    19761   19678   19676    18710    18519    18607    16895
paper2    32296   32098   32096    30034    29566    29825    25453
pic       56828   52223   52221    53578    52777    51778    55461
progc     13955   13943   13941    13408    13363    13475    12896
progl     16954   16916   16914    16408    16148    16586    17354
progp     11558   11509   11507    11308    11214    11647    11668
trans     22737   22580   22578    20046    19808    20506    21023
      1,106,013 1,096,166 1,096,138 1,036,934 1,022,523 1,005,367  890,976
real      3m28s   4m07s   6m03s                     1h22m17s  27m05s
user      1m45s   3m47s   4m23s                     1h20m46s  19m27s
sys       0m11s   0m04s   0m08s                       0m12s    2m03s
MSDOS:                    1m49s    2m41s    1m55s
Notes:
- zip 2.0 is not included in this comparison since it will be released
only when pkzip 2.0 is released. (Compression is comparable to that of
pkzip 1.93a.)
- the compressed data for 'zoo ah' is always two bytes longer than for
lha. This is simply because both programs are derived from the same
source (ar002, written by Haruhiko Okumura).
- hpack 0.75a gives slightly different results on SunOS (non-deterministic
  behaviour still under investigation).
- the MSDOS versions are all optimized with assembler code and were run
on a RAM disk. So it is not surprising that they go faster than their
Unix equivalent.
------------------------------------------------------------------------------
~Subject: [6] What is the state of the art in lossless image compression?
The current state-of-the-art is the JBIG algorithm. For an
introduction to JBIG, see question 74 in part 2.
JBIG works best on bi-level images (like faxes) and also works well on
Gray-coded gray-scale images of up to about six bits per pixel: you
just apply JBIG to the bit planes individually. For images with more
bits per pixel, lossless JPEG sometimes provides better performance.
(For JPEG, see question 13 below.)
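As a small illustration of the bit-plane idea (an example only, not code
from any JBIG implementation; the pixel value is made up):

/* Illustration only: convert one 8-bit pixel to its Gray code
 * (g = b ^ (b >> 1)) and peel off its bit planes.  Each plane of a whole
 * image would be handed to the JBIG coder as a separate bi-level image. */
#include <stdio.h>

int main(void)
{
    unsigned char pixel = 0x6d;                  /* an arbitrary sample pixel */
    unsigned char gray  = pixel ^ (pixel >> 1);  /* binary -> Gray code */
    int plane;

    for (plane = 7; plane >= 0; plane--)
        printf("bit plane %d: %d\n", plane, (gray >> plane) & 1);
    return 0;
}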
You can find a description of JBIG in ISO/IEC CD 11544, contained in
document ISO/IEC JTC1/SC2/N2285. The only way to get it is to ask
your National Standards Body for a copy.
------------------------------------------------------------------------------
~Subject: [7] Which books should I read?
[BWC 1989] Bell, T.C, Witten, I.H, and Cleary, J.G. "Text Compression",
Prentice-Hall 1989. ISBN: 0-13-911991-4. Price: approx. US$40
The reference on text data compression.
[Nel 1991] Mark Nelson, "The Data Compression Book"
M&T Books, Redwood City, CA, 1991. ISBN 1-55851-216-0.
Price $36.95 including two 5" PC-compatible disks bearing
all the source code printed in the book.
A practical introduction to data compression.
The book is targeted at a person who is comfortable reading C code but
doesn't know anything about data compression. Its stated goal is to get
you up to the point where you are competent to program standard
compression algorithms.
[Will 1990] Williams, R. "Adaptive Data Compression", Kluwer Books, 1990.
ISBN: 0-7923-9085-7. Price: US$75.
Reviews the field of text data compression and then addresses the
problem of compressing rapidly changing data streams.
[Stor 1988] Storer, J.A. "Data Compression: Methods and Theory", Computer
Science Press, Rockville, MD. ISBN: 0-88175-161-8.
A survey of various compression techniques, mainly statistical
non-arithmetic compression and LZSS compression. Includes complete Pascal
code for a series of LZ78 variants.
Review papers:
[BWC 1989] Bell, T.C, Witten, I.H, and Cleary, J.G. "Modeling for Text
Compression", ACM Computing Surveys, Vol.21, No.4 (December 1989), p.557
A good general overview of compression techniques (as well as modeling for
text compression); the condensed version of "Text Compression".
[Lele 1987] Lelewer, D.A, and Hirschberg, D.S. "Data Compression", ACM
Computing Surveys, Vol.19, No.3 (September 1987), p.261.
A survey of data compression techniques which concentrates on Huffman
compression and makes only passing mention of other techniques.
------------------------------------------------------------------------------
~Subject: [8] What about patents on data compression algorithms?
[Note: the appropriate group for discussing software patents is
comp.patents (or misc.legal.computing), not comp.compression.]
- The Gibson & Graybill patent 5,049,881 is the most general.
Claims 4 and 12 cover about any LZ algorithm using hashing, and could
even be interpreted as applying to the LZ78 family. (See below,
"Introduction to data compression" for the meaning of 'LZ').
4. A compression method for compressing a stream of input data into
a compressed stream of output data based on a minimum number of
characters in each input data string to be compressed, said
compression method comprising the creation of a hash table, hashing
each occurrence of a string of input data and subsequently searching
for identical strings of input data and if such an identical string
of input data is located whose string size is at least equal to the
minimum compression size selected, compressing the second and all
subsequent occurrences of such identical string of data, if a string
of data is located which does not match to a previously compressed
string of data, storing such data as uncompressed data, and for each
input strings after each hash is used to find a possible previous
match location of the string, the location of the string is stored
in the hash table, thereby using the previously processed data to
act as a compression dictionary.
Claim 12 is identical, with 'method' replaced with 'apparatus'.
Since the 'minimal compression size' can be as small as 2, the claim
covers any dictionary technique of the LZ family.
- Phil Katz, author of pkzip, also has a patent on LZ77 (5,051,745)
but the claims only apply to sorted hash tables, and when the hash
table is substantially smaller than the window size.
- The LZW algorithm used in 'compress' is patented by IBM (4,814,746)
and Unisys (4,558,302). Unisys has licensed it for use in the V.42bis
compression standard. (See question 11 on V.42bis below.)
- AP coding is patented by Storer (4,876,541). (Get the yabba package
for source code, see question 2 above, file type .Y)
- Fiala and Greene have a patent (pending?) on the algorithms they
published in Comm.ACM, April 89. One of their algorithms is used in lha
and zoo, and was used in zip 0.8.
- IBM patented (5,001,478) the idea of combining a history buffer (the
LZ77 technique) and a lexicon (as in LZ78).
- IBM holds a patent on the Q-coder implementation of arithmetic
coding. The arithmetic coding option of the JPEG standard requires
use of the patented algorithm. (See the JPEG FAQ for details.)
Here are some references on data compression patents, taken from the
list maintained by Michael Ernst <mernst@theory.lcs.mit.edu> in
mintaka.lcs.mit.edu:/mitlpf/ai/patent-list (or patent-list.Z).
4,464,650
Apparatus and method for compressing data signals and restoring the
compressed data signals
inventors Lempel, Ziv, Cohn, Eastman
assignees Sperry Corporation and At&T Bell Laboratories
filed 8/10/81, granted 8/7/84
4,558,302
High speed data compression and decompression apparatus and method
inventor Welch
assignee Sperry Corporation (now Unisys)
filed 6/20/83, granted 12/10/85
The text for this patent can be ftped from ftp.uu.net as pub/lzw-patent.Z
4,814,746
Data compression method
inventors Victor S. Miller, Mark N. Wegman
assignee IBM
filed 8/11/86, granted 3/21/89
4,876,541
Stem [sic] for dynamically compressing and decompressing electronic data
inventor James A. Storer
assignee Data Compression Corporation
filed 10/15/87, granted 10/24/89
4,955,066
Compressing and Decompressing Text Files
inventor Notenboom, L.A.
assignee Microsoft
filed 10/13/89, granted 09/04/90
5,001,478
Method of Encoding Compressed Data
filed 1989-12-28
granted 1991-03-19
inventor Michael E. Nagy
assignee IBM
5,049,881
Apparatus and method for very high data rate-compression incorporating
lossless data compression and expansion utilizing a hashing technique
inventors Dean K. Gibson, Mark D. Graybill
assignee Intersecting Concepts, Inc.
filed 6/18/90, granted 9/17/91
cites McIntosh 3,914,586, Johannesson 4,087,788, Eastman 4,464,650,
Finn 4,560,976, Tsukiyama 4,586,027 and 4,758,899, Kunishi 4,677,649,
Mathes 4,682,150
5,051,745
String searcher, and compressor using same
inventor Phillip W. Katz (author of pkzip)
filed 8/21/90, granted 9/24/91
cites MacCrisken 4,730,348 and Hong 4,961,139
Data Compression with Finite Windows, Comm. ACM, 32, 4 (1989) 490-505.
	inventors Fiala, E.R. and Greene, D.H.
------------------------------------------------------------------------------
~Subject: [9] The WEB 16:1 compressor.
[WARNING: this topic has generated the greatest volume of news in the
history of comp.compression. Read this before posting on this subject.]
(a) What the press says
April 20, 1992 Byte Week Vol 4. No. 25:
"In an announcement that has generated high interest - and more than a
bit of skepticism - WEB Technologies (Smyrna, GA) says it has
developed a utility that will compress files of greater than 64KB in
size to about 1/16th their original length. Furthermore, WEB says its
DataFiles/16 program can shrink files it has already compressed."
[...]
"A week after our preliminary test, WEB showed us the program successfully
compressing a file without losing any data. But we have not been able
to test this latest beta release ourselves."
[...]
"WEB, in fact, says that virtually any amount of data can be squeezed
to under 1024 bytes by using DataFiles/16 to compress its own output
multiple times."
(b) First details, by John Wallace <buckeye@spf.trw.com>:
I called WEB at (404)514-8000 and they sent me some product
literature as well as chatting for a few minutes with me on the phone.
Their product is called DataFiles/16, and their claims for it are
roughly those heard on the net.
According to their flier:
"DataFiles/16 will compress all types of binary files to approximately
one-sixteenth of their original size ... regardless of the type of
file (word processing document, spreadsheet file, image file,
executable file, etc.), NO DATA WILL BE LOST by DataFiles/16."
(Their capitalizations; 16:1 compression only promised for files >64K
bytes in length.)
"Performed on a 386/25 machine, the program can complete a
compression/decompression cycle on one megabyte of data in less than
thirty seconds"
"The compressed output file created by DataFiles/16 can be used as the
input file to subsequent executions of the program. This feature of
the utility is known as recursive or iterative compression, and will
enable you to compress your data files to a tiny fraction of the
original size. In fact, virtually any amount of computer data can
be compressed to under 1024 bytes using DataFiles/16 to compress its
own output files multiple times. Then, by repeating in reverse the
steps taken to perform the recursive compression, all original data
can be decompressed to its original form without the loss of a single
bit."
They had a table that showed the expected size of resulting files,
with the warning that "Your actual compression results may vary
slightly from the figures shown". Here is my abridged version of their
table.
                 ---------- Iteration ----------
INPUT         1         2         3         4       Ratio
  1K         630                                     1.6:1
 16K        1.5K       812                            20:1
 64K          4K        1K       644                 101:1
512K         30K        2K       938                 558:1
  8M        490K       14K      1.5K       798     10512:1
 64M          3M       40K        3K       994     67513:1
Their flier also claims:
"Constant levels of compression across ALL TYPES of FILES"
"Convenient, single floppy DATA TRANSPORTATION"
From my telephone conversation, I was assured that this is an
actual compression program. Decompression is done by using only the
data in the compressed file; there are no hidden or extra files.
(c) More information, by Rafael Ramirez <rafael.ramirez@channel1.com>:
Today (Tuesday, 28th) I got a call from Earl Bradley of Web
who now says that they have put off releasing a software version of
the algorithm because they are close to signing a major contract with
a big company to put the algorithm in silicon. He said he could not
name the company due to non-disclosure agreements, but that they had
run extensive independent tests of their own and verified that the
algorithm works. [...]
Mr. Bradley went on to say that Web will not be sending out any
more copies to magazines and that they had even recalled the copy they
had sent to Byte. He claimed that he told the guy at Byte that the
version they had at the time had just been translated to assembler and
had some bugs, but that the guy at Byte kept insisting that they send
a copy anyway, and so of course the version Byte had didn't work.
He said the algorithm is so simple that he doesn't want anybody
getting their hands on it and copying it even though he said they
have filed a patent on it. [...] Mr. Bradley said the silicon version
would hold up much better to patent enforcement and be harder to copy.
He claimed that the algorithm takes up about 4K of code, uses only
integer math, and the current software implementation only uses a 65K
buffer. He said the silicon version would likely use a parallel
version and work in real-time.
He said they will be sending out copies to about seven companies
that want to license the technology for various applications but he
could not give out any names due to non-disclosure agreements. He
hoped that in about two weeks he will be able to make an announcement
and said he would call me when he could provide independent
verification that the algorithm works. For now, we can only wait.
He also said they have not as yet sold anything, but that he's
been traveling constantly for the last three weeks (presumably to
the seven companies he mentioned) and that he hasn't slept very
well lately just thinking about all the applications the algorithm
could be applied to.
And he confirmed that each pass will get 16:1 compression as
long as the data is >64K, and that regardless, any file should be
able to be compressed to less than 1024 bytes after enough passes.
I asked if he is claiming that they can compress ANY data including
data that is already compressed, and he just answered by saying
that they had tested the program by compressing PKZIP files, and
other files and that they all compressed as claimed (which of course
still doesn't answer the question).
(d) The interpretation of the claims
The biggest controversy is over the claim to compress "all types of
files". As noted above by Rafael Ramirez, we do not know with
certainty if WEB claims to compress *all* files greater than 64K
bytes, or just *most* files. The WEB flier only says all *types* of
files, not *all* files. Keep this in mind when reading the
impossibility proof given below.
(e) The impossibility proofs.
It is impossible for a given program to compress without loss *all*
files greater than a certain size by at least one bit. This can be
proven by a simple counting argument. (Many other proofs have been
posted on comp.compression, *please* do not post yet another one.)
Assume that the program can compress without loss all files of size >= N
bits. Compress with this program all the 2^N files which have
exactly N bits. All compressed files have at most N-1 bits, so there
are at most 2^(N-1) different compressed files. So at least two
different input files must compress to the same output file. (Actually
at least half of them, but two suffice for the proof.) Hence the
compression program cannot be lossless.
This argument applies of course to WEB's case (take N = 64K*8 bits).
Note that no assumption is made about the compression algorithm.
The proof applies to *any* algorithm, including those using an
external dictionary, or repeated application of another algorithm,
or combination of different algorithms, or representation of the
data as formulas, etc... All schemes are subject to the counting argument.
There is no need to use information theory to provide a proof, just
basic mathematics.
This assumes of course that the information available to the decompressor
is only the bit sequence of the compressed data. If external information
such as a file name or a number of iterations is necessary to decompress
the data, the bits providing the extra information must be included in
the bit count of the compressed data. (Otherwise, it would be sufficient
to consider any input data as a number, use this as the iteration
count or file name, and pretend that the compressed size is zero.)
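For the curious, here is a trivial program (an illustration of the counting
argument only, with a small N so the numbers fit in a machine word):

/* Counting argument in miniature: there are 2^N files of exactly N bits,
 * but only 2^N - 1 files of N-1 bits or fewer, so any lossless compressor
 * must map at least two of those inputs to the same output (pigeonhole). */
#include <stdio.h>

int main(void)
{
    int n = 16;                                /* file size in bits */
    unsigned long inputs  = 1UL << n;          /* files of exactly n bits */
    unsigned long outputs = (1UL << n) - 1;    /* files of fewer than n bits */

    printf("files of exactly %d bits : %lu\n", n, inputs);
    printf("files of fewer bits      : %lu\n", outputs);
    printf("=> at least two inputs must share a compressed output\n");
    return 0;
}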
(f) Conclusion
Most readers of comp.compression are tired of this thread. Please,
please, do not post another article "I know this has been beaten to
death, but...".
The consensus is that we have to wait until WEB delivers a real product
which can be independently tested. Only then will it be possible
to know exactly what the product can and cannot do.
[See also question 73 "What is the theoretical compression limit?" in
part 2 of this FAQ.]
------------------------------------------------------------------------------
~Subject: [10] What is the state of fractal compression?
from Tal Kubo <kubo@zariski.harvard.edu>:
According to Barnsley's book 'Fractals Everywhere', this method is
based on a measure of deviation between a given image and its
approximation by an IFS code. The Collage Theorem states that there is
a convergent process to minimize this deviation. Unfortunately,
according to an article Barnsley wrote for BYTE a few years ago, this
convergence was rather slow, about 100 hours on a Cray, unless assisted by
a person.
Barnsley et al are not divulging any technical information beyond the
meager bit in 'Fractals Everywhere'. The book explains the idea of IFS
codes at length, but is vague about the application of the Collage theorem
to specific compression problems.
There is reason to believe that Barnsley's company has
*no algorithm* which takes a given reasonable image and achieves
the compression ratios initially claimed for their fractal methods.
The 1000-to-1 compression advertised was achieved only for a 'rigged'
class of images, with human assistance. The best unaided
performance I've heard of is good lossy compression of about 80-1.
Steve Tate <srt@duke.cs.duke.edu> confirms:
Compression ratios (unzoomed) seem to range from 20:1 to 60:1... The
quality is considerably worse than wavelets or JPEG on most of the
non-contrived images I have seen.
There is a fractal image compression demo program available via anonymous
ftp in lyapunov.ucsd.edu:/pub/fractal_image_processing/all.tar.Z.
There are executables and sample images in the same directory.
~References:
M. Barnsley, L. Anson, "Graphics Compression Technology," SunWorld,
October 1991, pp. 42-52.
M.F. Barnsley, A. Jacquin, F. Malassenet, L. Reuter & A.D. Sloan,
'Harnessing chaos for image synthesis', Computer Graphics,
vol 22 no 4 pp 131-140, 1988.
M.F. Barnsley, A.E. Jacquin, 'Application of recurrent iterated
function systems to images', Visual Comm. and Image Processing,
vol SPIE-1001, 1988.
A. Jacquin, "Image Coding Based on a Fractal Theory of Iterated Contractive
Image Transformations" p.18, January 1992 (Vol 1 Issue 1) of IEEE Trans
on Image Processing.
A. Jacquin, A Fractal Theory of Iterated Markov Operators with
Applications to Digital Image Coding, PhD Thesis, Georgia Tech, 1989.
A.E. Jacquin, 'A novel fractal block-coding technique for digital
images', Proc. ICASSP 1990.
A. Jacquin, 'Fractal image coding based on a theory of iterated
contractive image transformations', Visual Comm. and Image
Processing, vol SPIE-1360, 1990.
G.E. Oien, S. Lepsoy & T.A. Ramstad, 'An inner product space
approach to image coding by contractive transformations',
Proc. ICASSP 1991, pp 2773-2776.
D.S. Mazel, Fractal Modeling of Time-Series Data, PhD Thesis,
Georgia Tech, 1991. (One dimensional, not pictures)
S. A. Hollatz, "Digital image compression with two-dimensional affine
fractal interpolation functions", Department of Mathematics and
Statistics, University of Minnesota-Duluth, Technical Report 91-2.
(a nuts-and-bolts how-to-do-it paper on the technique)
Stark, J., ``Iterated function systems as neural networks'',
Neural Networks, Vol 4, pp 679-690, Pergamon Press, 1991.
Barnsley's company is:
  Iterated Systems Inc.              Contacts: Alan Sloan,
  5550 Peachtree Parkway                       Rick Darby,
  Norcross (Atlanta, Georgia)                  or Louisa Anson (technical)
  GA 30092
Tel: 404-840-0633
Fax: 404-840-0806
------------------------------------------------------------------------------
~Subject: [11] What is the V.42bis standard?
from Alejo Hausner <hausner@qucis.queensu.ca>:
The V.42bis Compression Standard was proposed by the International
Consultative Committee on Telephony and Telegraphy (CCITT) as an
addition to the V.42 error-correction protocol for modems. Its purpose
is to increase data throughput; it uses a variant of the
Lempel-Ziv-Welch (LZW) compression method. It is meant to be
implemented in the modem hardware, but can also be built into the
software that interfaces to an ordinary non-compressing modem.
V.42bis can send data compressed or not, depending on the
data. There are some types of data that cannot be
compressed. For example, if a file was compressed first,
and then sent through a V.42bis modem, the modem would not
likely reduce the number of bits sent. Indeed it is likely
that the amount of data would increase somewhat.
To avoid this problem, the algorithm constantly monitors the
compressibility of the data, and if it finds fewer bits
would be necessary to send it uncompressed, it switches to
transparent mode. The sender informs the receiver of this
transition through a reserved escape code. Henceforth the
data is passed as plain bytes.
The choice of escape code is clever. Initially, it is a
zero byte. Any occurrence of the escape code is replaced,
as is customary, by two escape codes. In order to prevent a
string of escape codes from temporarily cutting throughput
in half, the escape code is redefined by adding 51 to it, modulo 256,
each time it is used.
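To make the escape mechanism concrete, here is a tiny sketch of the
description above (an illustration only, not code from the V.42bis
recommendation; the function name is made up):

/* Illustration only: in transparent mode, a data byte equal to the current
 * escape code is sent twice, and the escape code is then cycled by adding
 * 51 modulo 256, as described above. */
#include <stdio.h>

static unsigned char esc = 0;                 /* initial escape code is zero */

void send_transparent_byte(unsigned char c, FILE *out)
{
    putc(c, out);
    if (c == esc) {
        putc(c, out);                         /* double the escape code */
        esc = (unsigned char)((esc + 51) % 256);  /* redefine the escape code */
    }
}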
While transmitting in transparent mode, the sender maintains
the LZW trees of strings, and expects the receiver to do
likewise. If it finds an advantage in returning to
compressed mode, it will do so, first informing the receiver
by a special control code. Thus the method allows the
hardware to adapt to the compressibility of the data.
The CCITT standards documents are available by ftp on ftp.uu.net
in directory /doc/standards/ccitt. Also on src.doc.ic.ac.uk,
in directory doc/ccitt-standards/ccitt. The v42bis standard is in
/doc/ccitt-standards/ccitt/1992/v/v42bis.asc.Z.
------------------------------------------------------------------------------
~Subject: [12] I need specs and source for TIFF and CCITT group 4 Fax
Specs for Group 3 and 4 image coding (group 3 is very similar to group 4)
are in CCITT (1988) volume VII fascicle VII.3. They are recommendations
T.4 and T.6 respectively. There is also an updated spec contained in 1992
recommendations T.1 to T.6.
CCITT specs are available by anonymous ftp (see above answer on V.42bis).
The T.4 spec is in ccitt/1988/ascii/7_3_01.txt.Z, the T.6 spec
is in 7_3_02.txt.Z.
Source code can be obtained as part of a TIFF toolkit - TIFF image
compression techniques for binary images include CCITT T.4 and T.6:
sgi.com:/graphics/tiff/v3.0beta.tar.Z [192.48.153.1]
Contact: sam@sgi.com
There is also a companion compressed tar file (v3.0pics.tar.Z) that
has sample TIFF image files. A draft of TIFF 6.0 is in TIFF6.ps.Z.
See also question 54 below.
------------------------------------------------------------------------------
~Subject: [13] What is JPEG?
JPEG (pronounced "jay-peg") is a standardized image compression mechanism.
JPEG stands for Joint Photographic Experts Group, the original name of the
committee that wrote the standard. JPEG is designed for compressing either
full-color or gray-scale digital images of "natural" (real-world) scenes.
JPEG does not handle black-and-white (1-bit-per-pixel) images, nor does it
handle motion picture compression. (Standards for compressing those types
of images are being worked on by other committees, named JBIG and MPEG
respectively.)
A good introduction to JPEG is posted regularly in news.answers by
Tom Lane <tgl+@cs.cmu.edu>. (See question 53 "Where are FAQ lists archived"
if this posting has expired at your site.)
------------------------------------------------------------------------------
~Subject: [14] Are there algorithms and standards for audio compression?
Yes. See the introduction to MPEG given in part 2 of this FAQ.
Copied from the comp.dsp FAQ posted by guido@cwi.nl (Guido van Rossum):
Strange though it seems, audio data is remarkably hard to compress
effectively. For 8-bit data, a Huffman encoding of the deltas between
successive samples is relatively successful. For 16-bit data,
companies like Sony and Philips have spent millions to develop
proprietary schemes.
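As a tiny illustration of the delta idea for 8-bit data (the sample values
below are made up), the step before the Huffman coder could look like this:

/* Illustration only: replace each 8-bit sample by its difference from the
 * previous sample.  The deltas cluster around zero, which is what makes a
 * subsequent Huffman encoding of them effective. */
#include <stdio.h>

int main(void)
{
    int samples[] = { 128, 131, 135, 134, 130, 129 };   /* made-up samples */
    int n = sizeof samples / sizeof samples[0];
    int i;

    for (i = 1; i < n; i++)
        printf("delta[%d] = %d\n", i, samples[i] - samples[i - 1]);
    return 0;
}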
Public standards for voice compression are slowly gaining popularity,
e.g. CCITT G.721 and G.723 (ADPCM at 32 and 24 kbits/sec). (ADPCM ==
Adaptive Differential Pulse Code Modulation.)
There are also two US federal standards, 1016 (Code excited linear
prediction (CELP), 4800 bits/s) and 1015 (LPC-10E, 2400 bits/s). See
also the appendix for 1016.
(Note that U-LAW and silence detection can also be considered
compression schemes.)
------------------------------------------------------------------------------
~Subject: [15] I need source for the winners of the Dr Dobbs compression contest
The source of the top 6 programs of the Feb 91 Dr Dobbs data compression
contest are available by ftp on
wsmr-simtel20.army.mil in pd1:<msdos.compress>ddjcompr.zip. [192.88.110.2]
garbo.uwasa.fi:/pc/source/ddjcompr.zip [128.214.87.1]
The sources are in MSDOS end-of-line format, one directory per program.
Unix or VMS users, use "unzip -ad ddjcompr" to get correct end-of-lines
and recreate the directory structure. Five of the 6 programs are not
portable and only run on MSDOS.
------------------------------------------------------------------------------
~Subject: [16] I am looking for source of an H.261 codec.
from Thierry TURLETTI <turletti@sophia.inria.fr>:
We have implemented a software version of an H.261 codec.
It runs on top of UNIX and X-Windows. The coder uses the simple video capture
board "VideoPix" provided by SUN for the SparcStation. The output is directed
towards a standard TCP connection, instead of the leased lines or switched
circuits for which regular H.261 codecs are designed. This enables us to test
video conferences over regular internet connections.
We have to polish it a bit, but the first release is now available by anonymous
ftp from avahi.inria.fr, in "/pub/h261.tar.Z".
------------------------------------------------------------------------------
~Subject: [17] I need source for arithmetic coding
(See question 70 for an introduction to arithmetic coding.)
Kris Popat <popat@image.mit.edu> has worked on "Scalar Quantization
with Arithmetic Coding." It describes an arithmetic coding technique
which is quite general and computationally inexpensive. The
documentation and example C code are available via anonymous ftp from
media-lab.media.mit.edu (18.85.0.2), in /pub/k-arith-code.
------------------------------------------------------------------------------
~Subject: [30] My archive is corrupted!
The two most common reasons for this are
(1) failing to use the magic word "tenex" (when connected to SIMTEL20 and
other TOPS20 systems) or "binary" (when connected to UNIX systems) when
transferring the file from an ftp site to your host machine. The
reasons for this are technical and boring. A synonym for "tenex" is
"type L 8", in case your ftp doesn't know what "tenex" means.
(2) failing to use an eight-bit binary transfer protocol when transferring
the file from the host to your PC. Make sure to set the transfer type
to "binary" on both your host machine and your PC.
------------------------------------------------------------------------------
~Subject: [31] pkunzip reports a CRC error!
The portable zip contains many workarounds for undocumented restrictions
in pkunzip. Compatibility is ensured for pkunzip 1.10 only. All previous
versions (pkunzip 1.0x) have too many bugs and cannot be supported. This
includes Borland unzip.
So if your pkunzip reports a CRC error, check that you are not using
an obsolete version. Get either pkzip 1.10 or unzip 4.2 (see question
2 above for ftp sites).
Immediately after zip 1.0 was released, a new undocumented feature
of pkunzip was discovered, which causes CRC errors even with pkunzip 1.10
on rare occasions. A patch is available on valeria.cs.ucla.edu in
/pub/zip10.patch.
------------------------------------------------------------------------------
~Subject: [32] VMS zip is not compatible with pkzip!
The problem is most likely in the file transfer program.
Many use kermit to transfer zipped files between PC and VMS VAX. The
following VMS kermit settings make VMS-ZIP compatible with PKZIP:
                                              VMS kermit        PC kermit
                                              ---------------   --------------
Uploading PKZIPped file to be UNZIPped:       set fi ty fixed   set fi ty bi
Downloading ZIPped file to be PKUNZIPped:     set fi ty block   set fi ty bi
If you are not using kermit, transfer a file created by pkzip on MSDOS
to VMS, transfer it back to your PC and check that pkunzip can extract it.
------------------------------------------------------------------------------
~Subject: [50] What is this 'tar' compression program?
tar is not a compression program. It just combines several files
into one, without compressing them. tar files are often compressed with
'compress', resulting in a .tar.Z file. See question 2, file type .tar.Z.
(However, some versions of tar have the capability to compress files
as well.)
When you have to archive a lot of very small files, it is often
preferable to create a single .tar file and compress it, rather than to
compress the individual files separately. The compression program can
thus take advantage of redundancy between separate files. The
disadvantage is that you must uncompress the whole .tar file to
extract any member.
------------------------------------------------------------------------------
~Subject: [51] I need a CRC algorithm
As its name implies (Cyclic Redundancy Check), a CRC adds redundancy,
whereas the topic of this group is removing it. But since this
question comes up often, here is some code (by Rob Warnock <rpw3@sgi.com>).
The following C code does CRC-32 in BigEndian/BigEndian byte/bit order.
That is, the data is sent most significant byte first, and each of the bits
within a byte is sent most significant bit first, as in FDDI. You will need
to twiddle with it to do Ethernet CRC, i.e., BigEndian/LittleEndian byte/bit
order. [Left as an exercise for the reader.]
The CRCs this code generates agree with the vendor-supplied Verilog models
of several of the popular FDDI "MAC" chips.
#include <sys/types.h>        /* for the u_char and u_long types used below */

void init_crc32(void);        /* forward declaration; defined below */

u_long crc32_table[256];
/* Initialized first time "crc32()" is called. If you prefer, you can
* statically initialize it at compile time. [Another exercise.]
*/
u_long crc32(u_char *buf, int len)
{
u_char *p;
u_long crc;
if (!crc32_table[1]) /* if not already done, */
init_crc32(); /* build table */
crc = 0xffffffff; /* preload shift register, per CRC-32 spec */
for (p = buf; len > 0; ++p, --len)
crc = (crc << 8) ^ crc32_table[(crc >> 24) ^ *p];
return ~crc; /* transmit complement, per CRC-32 spec */
}
/*
* Build auxiliary table for parallel byte-at-a-time CRC-32.
*/
#define CRC32_POLY 0x04c11db7 /* AUTODIN II, Ethernet, & FDDI */
void init_crc32(void)
{
int i, j;
u_long c;
for (i = 0; i < 256; ++i) {
for (c = i << 24, j = 8; j > 0; --j)
c = c & 0x80000000 ? (c << 1) ^ CRC32_POLY : (c << 1);
crc32_table[i] = c;
}
}
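For illustration, here is a hypothetical little test driver (not part of
Rob Warnock's code) that can be compiled together with the routines above:

#include <stdio.h>
#include <string.h>

int main(void)
{
    u_char msg[] = "123456789";              /* arbitrary test data */

    printf("CRC-32 of \"%s\" = 0x%08lx\n", (char *)msg,
           (unsigned long)crc32(msg, (int)strlen((char *)msg)));
    return 0;
}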
------------------------------------------------------------------------------
~Subject: [52] What about those people who continue to ask frequently asked
questions in spite of the frequently asked questions document?
Just send them a polite mail message, referring them to this document.
There is no need to flame them on comp.compression. That would just
add more noise to this group. Posted answers that are in the FAQ are
just as annoying as posted questions that are in the FAQ.
------------------------------------------------------------------------------
~Subject: [53] Where are FAQ lists archived?
Many are crossposted to news.answers. That newsgroup should have a
long expiry time at your site; if not, talk to your sysadmin.
FAQ lists are available by anonymous FTP from pit-manager.mit.edu
(18.72.1.58) and by email from mail-server@pit-manager.mit.edu (send
a message containing "help" for instructions about the mail server).
This posting is /pub/usenet/news.answers/compression-faq/part1.
Part 2 is in (guess?) compression-faq/part2.
------------------------------------------------------------------------------
~Subject: [54] I need specs for graphics formats
Have a look in directory public/graphics.formats on titan.rice.edu.
It contains descriptions of gif, tiff, fits, etc...
See also the FAQ list for comp.graphics.
------------------------------------------------------------------------------
~Subject: [55] Where can I find Lenna and other images?
A bunch of standard images (lenna, baboon, cameraman, crowd, moon
etc..) were on ftp site gauss.eedsp.gatech.edu (130.207.226.2) in
directory /database/images. On Apr 1st, the system manager said:
this site has had some hardware problems and will have the image
database back online as soon as the problems get corrected. However
the images are still not there (June 4th).
The site ftp.ipl.rpi.edu also has standard images, in two directories:
ftp.ipl.rpi.edu:/pub/image/still/usc
ftp.ipl.rpi.edu:/pub/image/still/canon
In each of those directories are the following directories:
bgr - 24 bit blue, green, red
color - 24 bit red, green, blue
gray - 8 bit grayscale uniform weighted
gray601 - 8 bit grayscale CCIR-601 weighted
And in these directories are the actual images.
For example, the popular lena image is in
ftp.ipl.rpi.edu:/pub/image/still/usc/color/lena # 24 bit RGB
ftp.ipl.rpi.edu:/pub/image/still/usc/bgr/lena # 24 bit BGR
ftp.ipl.rpi.edu:/pub/image/still/usc/gray/lena # 8 bit gray
All of the images are in Sun rasterfile format. You can use the pbm
utilities to convert them to whatever format is most convenient.
[pbm is available in ftp.ee.lbl.gov:/pbmplus*.tar.Z].
Questions about the ipl archive should be sent to rodney@ipl.rpi.edu.
The archive maintainer at ftp.ipl.rpi.edu is interested in some method
of establishing a canonical ftp database of images and could volunteer
the ipl to be an ftp site for that database. Send suggestions to
rodney@ipl.rpi.edu.
Beware: the same image often comes in many different forms, at
different resolutions, etc... The original lenna image is 512 wide,
512 high, 8 bits per pel, red, green and blue fields. Gray-scale
versions of Lenna have been obtained in two different ways from the
original:
(1) Using the green field as a gray-scale image, and
(2) Doing an RGB->YUV transformation and saving the Y component.
Method (1) makes it easier to compare different people's results since
everyone's version should be the same using that method. Method (2)
produces a more correct image.
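As a small illustration of method (2) (the pixel values below are made up;
the weights are the usual CCIR-601 luminance weights, the same weighting
named by the 'gray601' directory above):

/* Illustration only: compute the Y (luminance) component of one RGB pixel
 * with the common CCIR-601 weights.  Method (1) would simply keep the
 * green field unchanged. */
#include <stdio.h>

int main(void)
{
    unsigned char r = 180, g = 120, b = 90;      /* made-up pixel values */
    double y = 0.299 * r + 0.587 * g + 0.114 * b;

    printf("Y = %.1f\n", y);
    return 0;
}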
For the curious: 'lena' or 'lenna' is a digitized Playboy centerfold,
from November 1972. (Lenna is the spelling in Playboy, Lena is the
Swedish spelling of the name.) Lena Soderberg (nee Sjooblom) was last
reported living in her native Sweden, happily married with three kids
and a job with the state liquor monopoly. In 1988, she was
interviewed by some Swedish computer related publication, and she was
pleasantly amused by what had happened to her picture. That was the
first she knew of the use of that picture in the computer business.
The editorial in the January 1992 issue of Optical Engineering (v. 31
no. 1) details how Playboy has finally caught on to the fact that
their copyright on Lenna Sjooblom's photo is being widely infringed.
It sounds as if you will have to get permission from Playboy to
publish it in the future.
----Part 2
Contents
========
(Long) introductions to data compression techniques
[70] Introduction to data compression (long)
Huffman and Related Compression Techniques
Arithmetic Coding
Substitutional Compressors
The LZ78 family of compressors
The LZ77 family of compressors
[71] Introduction to MPEG (long)
What is MPEG?
Does it have anything to do with JPEG?
Then what's JBIG and MHEG?
What has MPEG accomplished?
So how does MPEG I work?
What about the audio compression?
So how much does it compress?
What's phase II?
When will all this be finished?
How do I join MPEG?
How do I get the documents, like the MPEG I draft?
[72] What is wavelet theory?
[73] What is the theoretical compression limit?
[74] Introduction to JBIG
[99] Acknowledgments
Search for "Subject: [#]" to get to question number # quickly. Some news
readers can also take advantage of the message digest format used here.
------------------------------------------------------------------------------
~Subject: [70] Introduction to data compression (long)
Written by Peter Gutmann <pgut1@cs.aukuni.ac.nz>.
Huffman and Related Compression Techniques
------------------------------------------
*Huffman compression* is a statistical data compression technique which
gives a reduction in the average code length used to represent the symbols of
an alphabet. The Huffman code is an example of a code which is optimal in the
case where all symbol probabilities are integral powers of 1/2. A Huffman
code can be built in the following manner:
(1) Rank all symbols in order of probability of occurrence.
(2) Successively combine the two symbols of the lowest probability to form
a new composite symbol; eventually we will build a binary tree where
each node is the probability of all nodes beneath it.
(3) Trace a path to each leaf, noticing the direction at each node.
For a given frequency distribution, there are many possible Huffman codes,
but the total compressed length will be the same. It is possible to
define a 'canonical' Huffman tree, that is, pick one of these alternative
trees. Such a canonical tree can then be represented very compactly, by
transmitting only the bit length of each code. This technique is used
in most archivers (pkzip, lha, zoo, arj, ...).
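As an illustration of steps (1) to (3), here is a small self-contained
sketch (the symbol frequencies are made up; a real archiver derives them
from the data and then builds a canonical code, transmitting only the bit
lengths):

#include <stdio.h>

#define NSYM 4

int main(void)
{
    long freq[2 * NSYM - 1] = { 45, 13, 12, 30 };  /* leaf counts, then parents */
    int  left[2 * NSYM - 1], right[2 * NSYM - 1], depth[2 * NSYM - 1];
    int  used[2 * NSYM - 1] = { 0 };
    int  count = NSYM, i;

    /* Step 2: repeatedly combine the two lowest-frequency nodes that do not
     * yet have a parent; the composite node gets the sum of their counts. */
    while (count < 2 * NSYM - 1) {
        int lo1 = -1, lo2 = -1;
        for (i = 0; i < count; i++) {
            if (used[i]) continue;
            if (lo1 < 0 || freq[i] < freq[lo1])      { lo2 = lo1; lo1 = i; }
            else if (lo2 < 0 || freq[i] < freq[lo2]) { lo2 = i; }
        }
        freq[count]  = freq[lo1] + freq[lo2];
        left[count]  = lo1;
        right[count] = lo2;
        used[lo1] = used[lo2] = 1;
        count++;
    }

    /* Step 3: a symbol's code length is its depth in the tree just built.
     * Children always have smaller indices than their parent, so a single
     * top-down pass from the root assigns every depth. */
    depth[2 * NSYM - 2] = 0;
    for (i = 2 * NSYM - 2; i >= NSYM; i--) {
        depth[left[i]]  = depth[i] + 1;
        depth[right[i]] = depth[i] + 1;
    }
    for (i = 0; i < NSYM; i++)
        printf("symbol %d: frequency %ld, code length %d bits\n",
               i, freq[i], depth[i]);
    return 0;
}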
A technique related to Huffman coding is *Shannon-Fano coding*, which was
suggested by Shannon and Weaver in 1949 and modified by Fano in 1961. It
works as follows:
(1) Rank all symbols in order of probability of occurrence.
(2) Successively divide the set of symbols into two equal or almost equal
subsets based on the probability of occurrence of characters in each
subset. The first symbol in one subset is assigned a binary zero, the
second a binary one.
The algorithm used to create the Huffman codes is bottom-up, and the
one for the Shannon-Fano codes is top-down. Huffman encoding always
generates optimal codes; Shannon-Fano sometimes uses a few more bits.
Arithmetic Coding
-----------------
It would appear that Huffman or Shannon-Fano coding is the perfect
means of compressing data. However, this is *not* the case. As
mentioned above, these coding methods are optimal when and only when
the symbol probabilities are integral powers of 1/2, which is usually
not the case.
The technique of *arithmetic coding* does not have this restriction:
It achieves the same effect as treating the message as one single unit
(a technique which would, for Huffman coding, require enumeration of
every single possible message), and thus attains the theoretical
entropy bound to compression efficiency for any source.
Arithmetic coding works by representing a number by an interval of real
numbers between 0 and 1. As the message becomes longer, the interval needed
to represent it becomes smaller and smaller, and the number of bits needed to
specify that interval increases. Successive symbols in the message reduce
this interval in accordance with the probability of that symbol. The more
likely symbols reduce the range by less, and thus add fewer bits to the
message.
1                                        Codewords
+-----------+-----------+-----------+     /-----\
|           |8/9   YY   |  Detail   |<- 31/32   .11111
|           +-----------+-----------+<- 15/16   .1111
|     Y     |           | too small |<- 14/16   .1110
|2/3        |    YX     | for text  |<- 6/8     .110
+-----------+-----------+-----------+
|           |           |16/27 XYY  |<- 10/16   .1010
|           |           +-----------+
|           |    XY     |           |
|           |           |    XYX    |<- 4/8     .100
|           |4/9        |           |
|           +-----------+-----------+
|           |           |           |
|     X     |           |    XXY    |<- 3/8     .011
|           |           |8/27       |
|           |           +-----------+
|           |    XX     |           |
|           |           |           |<- 1/4     .01
|           |           |    XXX    |
|           |           |           |
|0          |           |           |
+-----------+-----------+-----------+
As an example of arithmetic coding, let's consider the example of two
symbols X and Y, of probabilities 2/3 and 1/3. To encode a message, we
examine the first symbol: If it is an X, we choose the lower partition; if
it is a Y, we choose the upper partition. Continuing in this manner for
three symbols, we get the codewords shown to the right of the diagram above
- they can be found by simply taking an appropriate location in the
interval for that particular set of symbols and turning it into a binary
fraction. In practice, it is also necessary to add a special end-of-data
symbol, which is not represented in this simple example.
In this case the arithmetic code is not completely efficient, which is due
to the shortness of the message - with longer messages the coding efficiency
does indeed approach 100%.
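To make the interval narrowing concrete, here is a toy sketch in C for the
two-symbol source above (X with probability 2/3, Y with probability 1/3).
It uses floating point purely for readability; real coders (such as the
sources mentioned in question 17) work incrementally with integer
arithmetic and renormalization to avoid precision problems:
#include <stdio.h>

int main(void)
{
    const char *message = "XXY";     /* the message to encode */
    const char *p;
    double low = 0.0, high = 1.0;    /* current interval [low, high) */

    for (p = message; *p != '\0'; p++) {
        double range = high - low;
        if (*p == 'X')               /* X owns the lower 2/3 of the interval */
            high = low + range * (2.0 / 3.0);
        else                         /* Y owns the upper 1/3 */
            low  = low + range * (2.0 / 3.0);
        printf("%c -> [%.4f, %.4f)\n", *p, low, high);
    }
    /* any binary fraction inside the final interval (.011 = 3/8 for XXY,
       as in the diagram) identifies the whole message */
    return 0;
}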
Now that we have an efficient encoding technique, what can we do with it?
What we need is a technique for building a model of the data which we can
then use with the encoder. The simplest model is a fixed one, for example a
table of standard letter frequencies for English text which we can then use
to get letter probabilities. An improvement on this technique is to use an
*adaptive model*, in other words a model which adjusts itself to the data
which is being compressed as the data is compressed. We can convert the
fixed model into an adaptive one by adjusting the symbol frequencies after
each new symbol is encoded, allowing the model to track the data being
transmitted. However, we can do much better than that.
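To make the adaptive idea concrete, a minimal order-0 model over bytes
might look like the sketch below (the names are arbitrary, and a real
model would also rescale the counts now and then to keep them bounded):
unsigned freq[256];                  /* occurrence count of each byte value */
unsigned total;

void model_init(void)
{
    int i;
    for (i = 0; i < 256; i++)
        freq[i] = 1;                 /* start with every symbol counted once */
    total = 256;
}

/* probability handed to the arithmetic coder for symbol 'sym' */
double model_prob(int sym)
{
    return (double)freq[sym] / total;
}

/* called after 'sym' has been encoded (or decoded), so that the
   encoder's and decoder's models stay in step */
void model_update(int sym)
{
    freq[sym]++;
    total++;
}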
Using the symbol probabilities by themselves is not a particularly good
estimate of the true entropy of the data: We can take into account
intersymbol probabilities as well. The best compressors available today
take this approach: DMC (Dynamic Markov Coding) starts with a zero-order
Markov model and gradually extends this initial model as compression
progresses; PPM (Prediction by Partial Matching) looks for a match of the
text to be compressed in an order-n context. If no match is found, it
drops to an order n-1 context, until it reaches order 0. Both these
techniques thus obtain a much better model of the data to be compressed,
which, combined with the use of arithmetic coding, results in superior
compression performance.
So if arithmetic coding-based compressors are so powerful, why are they not
used universally? Apart from the fact that they are relatively new and
have not yet come into widespread use, there is also one major concern:
The fact that they consume rather large amounts of computing resources, both
in terms of CPU power and memory. The building of sophisticated models for
the compression can chew through a fair amount of memory (especially in the
case of DMC, where the model can grow without bounds); and the arithmetic
coding itself involves a fair amount of number crunching.
There is however an alternative approach, a class of compressors generally
referred to as *substitutional* or *dictionary-based compressors*.
Substitutional Compressors
--------------------------
The basic idea behind a substitutional compressor is to replace an
occurrence of a particular phrase or group of bytes in a piece of data with a
reference to a previous occurrence of that phrase. There are two main
classes of schemes, named after Jakob Ziv and Abraham Lempel, who first
proposed them in 1977 and 1978.
<The LZ78 family of compressors>
LZ78-based schemes work by entering phrases into a *dictionary* and then,
when a repeat occurrence of that particular phrase is found, outputting the
dictionary index instead of the phrase. There exist several compression
algorithms based on this principle, differing mainly in the manner in which
they manage the dictionary. The most well-known scheme (in fact the most
well-known of all the Lempel-Ziv compressors, the one which is generally (and
mistakenly) referred to as "Lempel-Ziv Compression"), is Terry Welch's LZW
scheme, which he designed in 1984 for implementation in hardware for high-
performance disk controllers.
Input string: /WED/WE/WEE/WEB
Character input:   Code output:   New code value and associated string:
/W                 /              256 = /W
E                  W              257 = WE
D                  E              258 = ED
/                  D              259 = D/
WE                 256            260 = /WE
/                  E              261 = E/
WEE                260            262 = /WEE
/W                 261            263 = E/W
EB                 257            264 = WEB
<END>              B
LZW starts with a 4K dictionary, of which entries 0-255 refer to individual
bytes, and entries 256-4095 refer to substrings. Each time a new code is
generated it means a new string has been parsed. New strings are generated
by appending the current character K to the end of an existing string w. The
algorithm for LZW compression is as follows:
set w = NIL
loop
    read a character K
    if wK exists in the dictionary
        w = wK
    else
        output the code for w
        add wK to the string table
        w = K
endloop
A sample run of LZW over a (highly redundant) input string can be seen in
the diagram above. The strings are built up character-by-character starting
with a code value of 256. LZW decompression takes the stream of codes and
uses it to exactly recreate the original input data. Just like the
compression algorithm, the decompressor adds a new string to the dictionary
each time it reads in a new code. All it needs to do in addition is to
translate each incoming code into a string and send it to the output. A
sample run of the LZW decompressor is shown below.
Input code: /WED<256>E<260><261><257>B
Input code:   Output string:   New code value and associated string:
/             /
W             W                256 = /W
E             E                257 = WE
D             D                258 = ED
256           /W               259 = D/
E             E                260 = /WE
260           /WE              261 = E/
261           E/               262 = /WEE
257           WE               263 = E/W
B             B                264 = WEB
The most remarkable feature of this type of compression is that the entire
dictionary has been transmitted to the decoder without actually explicitly
transmitting the dictionary. At the end of the run, the decoder will have a
dictionary identical to the one the encoder has, built up entirely as part of
the decoding process.
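A sketch of such a decoder in C is given below. It assumes plain 12-bit
codes with no CLEAR or end-of-stream codes (so it is an illustration of
the idea, not of the "compress" file format). The only delicate point is
a code which refers to the very entry about to be defined; it can only
stand for the previous string extended by its own first character, and is
handled specially:
#include <stdio.h>

#define TABLE_SIZE 4096

static int           prefix[TABLE_SIZE];  /* code of the string minus its last char */
static unsigned char suffix[TABLE_SIZE];  /* last character of the string */
static unsigned char first_char;          /* first char of the last string output */

/* output the string for 'code' by walking back through its prefixes */
static void output_string(int code, FILE *out)
{
    if (code >= 256) {
        output_string(prefix[code], out);
        putc(suffix[code], out);
    } else {
        first_char = (unsigned char)code;  /* root of the chain = first char */
        putc(code, out);
    }
}

void lzw_decode(const int *codes, int ncodes, FILE *out)
{
    int next = 256;                        /* next free dictionary entry */
    int old, cur, i;

    old = codes[0];                        /* the first code is always a raw byte */
    output_string(old, out);

    for (i = 1; i < ncodes; i++) {
        cur = codes[i];
        if (cur >= next) {                 /* code not defined yet: it must be the */
            prefix[next] = old;            /* old string followed by its own       */
            suffix[next] = first_char;     /* first character                      */
        }
        output_string(cur, out);
        if (next < TABLE_SIZE) {           /* add: old string + first char of cur */
            prefix[next] = old;
            suffix[next] = first_char;
            next++;
        }
        old = cur;
    }
}
Run on the code stream of the example above, this reproduces
/WED/WE/WEE/WEB and rebuilds entries 256 to 264 exactly as in the table.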
LZW is more commonly encountered today in a variant known as LZC, after
its use in the UNIX "compress" program. In this variant, pointers do not
have a fixed length. Rather, they start with a length of 9 bits, and then
slowly grow to their maximum possible length once all the pointers of a
particular size have been used up. Furthermore, the dictionary is not frozen
once it is full as for LZW - the program continually monitors compression
performance, and once this starts decreasing the entire dictionary is
discarded and rebuilt from scratch. More recent schemes use some sort of
least-recently-used algorithm to discard little-used phrases once the
dictionary becomes full rather than throwing away the entire dictionary.
Finally, not all schemes build up the dictionary by adding a single new
character to the end of the current phrase. An alternative technique is to
concatenate the previous two phrases (LZMW), which results in a faster
buildup of longer phrases than the character-by-character buildup of the
other methods. The disadvantage of this method is that a more sophisticated
data structure is needed to handle the dictionary.
[A good introduction to LZW, MW, AP and Y coding is given in the yabba
package. For ftp information, see question 2 in part one, file type .Y]
<The LZ77 family of compressors>
LZ77-based schemes keep track of the last n bytes of data seen, and when a
phrase is encountered that has already been seen, they output a pair of
values corresponding to the position of the phrase in the previously-seen
buffer of data, and the length of the phrase. In effect the compressor moves
a fixed-size *window* over the data (generally referred to as a *sliding
window*), with the position part of the (position, length) pair referring to
the position of the phrase within the window. The most commonly used
algorithms are derived from the LZSS scheme described by James Storer and
Thomas Szymanski in 1982. In this the compressor maintains a window of size
N bytes and a *lookahead buffer* the contents of which it tries to find a
match for in the window:
while( lookAheadBuffer not empty )
{
    get a pointer ( position, length ) to the longest match in the window
        for the lookahead buffer;
    if( length > MINIMUM_MATCH_LENGTH )
    {
        output a ( position, length ) pair;
        shift the window length characters along;
    }
    else
    {
        output the first character in the lookahead buffer;
        shift the window 1 character along;
    }
}
Decompression is simple and fast: Whenever a ( position, length ) pair is
encountered, go to that ( position ) in the window and copy ( length ) bytes
to the output.
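In C, and keeping the window as a circular buffer as described in the next
paragraph, the whole of LZSS decompression is essentially the sketch below
(how the flags and pairs are packed into bits is left out):
#include <stdio.h>

#define N 4096                         /* window size */

static unsigned char window[N];
static int wpos;                       /* current write position in the window */

/* emit one literal byte */
void put_literal(unsigned char c, FILE *out)
{
    putc(c, out);
    window[wpos] = c;                  /* the output also goes into the window */
    wpos = (wpos + 1) % N;
}

/* emit the bytes referred to by a ( position, length ) pair */
void put_match(int position, int length, FILE *out)
{
    int i;
    for (i = 0; i < length; i++)
        put_literal(window[(position + i) % N], out);
}
Matches that overlap the current position work correctly because each
copied byte is written back into the window before the next one is read.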
Sliding-window-based schemes can be simplified by numbering the input text
characters mod N, in effect creating a circular buffer. The sliding window
approach automatically creates the LRU effect which must be done explicitly in
LZ78 schemes. Variants of this method apply additional compression to the
output of the LZSS compressor, which include a simple variable-length code
(LZB), dynamic Huffman coding (LZH), and Shannon-Fano coding (ZIP 1.x), all
of which result in a certain degree of improvement over the basic scheme,
especially when the data are rather random and the LZSS compressor has little
effect.
Recently an algorithm was developed which combines the ideas behind LZ77 and
LZ78 to produce a hybrid called LZFG. LZFG uses the standard sliding window,
but stores the data in a modified trie data structure and produces as output
the position of the text in the trie. Since LZFG only inserts complete
*phrases* into the dictionary, it should run faster than other LZ77-based
compressors.
All popular archivers (arj, lha, zip, zoo) are variations on the LZ77 theme.
------------------------------------------------------------------------------
~Subject: [71] Introduction to MPEG (long)
Written by Mark Adler <madler@cco.caltech.edu>.
Q. What is MPEG?
A. MPEG is a group of people that meet under ISO (the International
Standards Organization) to generate standards for digital video
(sequences of images in time) and audio compression. In particular,
they define a compressed bit stream, which implicitly defines a
decompressor. However, the compression algorithms are up to the
individual manufacturers, and that is where proprietary advantage
is obtained within the scope of a publicly available international
standard. MPEG meets roughly four times a year for roughly a week
each time. In between meetings, a great deal of work is done by
the members, so it doesn't all happen at the meetings. The work
is organized and planned at the meetings.
Q. So what does MPEG stand for?
A. Moving Pictures Experts Group.
Q. Does it have anything to do with JPEG?
A. Well, it sounds the same, and they are part of the same subcommittee
of ISO along with JBIG and MHEG, and they usually meet at the same
place at the same time. However, they are different sets of people
with few or no common individual members, and they have different
charters and requirements. JPEG is for still image compression.
Q. Then what's JBIG and MHEG?
A. Sorry I mentioned them. Ok, I'll simply say that JBIG is for binary
image compression (like faxes), and MHEG is for multi-media data
standards (like integrating stills, video, audio, text, etc.).
For an introduction to JBIG, see question 54 below.
Q. Ok, I'll stick to MPEG. What has MPEG accomplished?
A. So far (as of January 1992), they have completed the "Committee
Draft" of MPEG phase I, colloquially called MPEG I. It defines
a bit stream for compressed video and audio optimized to fit into
a bandwidth (data rate) of 1.5 Mbits/s. This rate is special
because it is the data rate of (uncompressed) audio CD's and DAT's.
The draft is in three parts, video, audio, and systems, where the
last part gives the integration of the audio and video streams
with the proper timestamping to allow synchronization of the two.
They have also gotten well into MPEG phase II, whose task is to
define a bitstream for video and audio coded at around 3 to 10
Mbits/s.
Q. So how does MPEG I work?
A. First off, it starts with a relatively low resolution video
sequence (possibly decimated from the original) of about 352 by
240 pixels at 30 frames/s (US--different numbers for Europe),
but original high (CD) quality audio. The images are in color,
but converted to YUV space, and the two chrominance channels
(U and V) are decimated further to 176 by 120 pixels. It turns
out that you can get away with a lot less resolution in those
channels and not notice it, at least in "natural" (not computer
generated) images.
The basic scheme is to predict motion from frame to frame in the
temporal direction, and then to use DCT's (discrete cosine
transforms) to organize the redundancy in the spatial directions.
The DCT's are done on 8x8 blocks, and the motion prediction is
done in the luminance (Y) channel on 16x16 blocks. In other words,
given the 16x16 block in the current frame that you are trying to
code, you look for a close match to that block in a previous or
future frame (there are backward prediction modes where later
frames are sent first to allow interpolating between frames).
The DCT coefficients (of either the actual data, or the difference
between this block and the close match) are "quantized", which
means that you divide them by some value to drop bits off the
bottom end. Hopefully, many of the coefficients will then end up
being zero. The quantization can change for every "macroblock"
(a macroblock is 16x16 of Y and the corresponding 8x8's in both
U and V). The results of all of this, which include the DCT
coefficients, the motion vectors, and the quantization parameters
(and other stuff) are Huffman coded using fixed tables. The DCT
coefficients have a special Huffman table that is "two-dimensional"
in that one code specifies a run-length of zeros and the non-zero
value that ended the run. Also, the motion vectors and the DC
DCT components are DPCM (subtracted from the last one) coded.
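To make the quantization step concrete: with a quantizer step of 16 and
simple rounding, a coefficient of 43 would be sent as 3 and reconstructed
as 48, and any coefficient smaller than 8 in magnitude would become zero
and cost almost nothing once run-length coded. (The rules in the actual
draft are a little more involved than plain rounding.)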
Q. So is each frame predicted from the last frame?
A. No. The scheme is a little more complicated than that. There are
three types of coded frames. There are "I" or intra frames. They
are simply a frame coded as a still image, not using any past
history. You have to start somewhere. Then there are "P" or
predicted frames. They are predicted from the most recently
reconstructed I or P frame. (I'm describing this from the point
of view of the decompressor.) Each macroblock in a P frame can
either come with a vector and difference DCT coefficients for a
close match in the last I or P, or it can just be "intra" coded
(like in the I frames) if there was no good match.
Lastly, there are "B" or bidirectional frames. They are predicted
from the closest two I or P frames, one in the past and one in the
future. You search for matching blocks in those frames, and try
three different things to see which works best. (Now I have the
point of view of the compressor, just to confuse you.) You try using
the forward vector, the backward vector, and you try averaging the
two blocks from the future and past frames, and subtracting that from
the block being coded. If none of those work well, you can intra-
code the block.
The sequence of decoded frames usually goes like:
IBBPBBPBBPBBIBBPBBPB...
where there are 12 frames from I to I (for US and Japan anyway).
This is based on a random access requirement that you need a
starting point at least once every 0.4 seconds or so. The ratio
of P's to B's is based on experience.
Of course, for the decoder to work, you have to send that first
P *before* the first two B's, so the compressed data stream ends
up looking like:
0xx312645...
where those are frame numbers. xx might be nothing (if this is
the true starting point), or it might be the B's of frames -2 and
-1 if we're in the middle of the stream somewhere.
You have to decode the I, then decode the P, keep both of those
in memory, and then decode the two B's. You probably display the
I while you're decoding the P, and display the B's as you're
decoding them, and then display the P as you're decoding the next
P, and so on.
Q. You've got to be kidding.
A. No, really!
Q. Hmm. Where did they get 352x240?
A. That derives from the CCIR-601 digital television standard which
is used by professional digital video equipment. It is (in the US)
720 by 243 by 60 fields (not frames) per second, where the fields
are interlaced when displayed. (It is important to note though
that fields are actually acquired and displayed a 60th of a second
apart.) The chrominance channels are 360 by 243 by 60 fields a
second, again interlaced. This degree of chrominance decimation
(2:1 in the horizontal direction) is called 4:2:2. The source
input format for MPEG I, called SIF, is CCIR-601 decimated by 2:1
in the horizontal direction, 2:1 in the time direction, and an
additional 2:1 in the chrominance vertical direction. And some
lines are cut off to make sure things divide by 8 or 16 where
needed.
Q. What if I'm in Europe?
A. For 50 Hz display standards (PAL, SECAM) change the number of lines
in a field from 243 or 240 to 288, and change the display rate to
50 fields/s or 25 frames/s. Similarly, change the 120 lines in
the decimated chrominance channels to 144 lines. Since 288*50 is
exactly equal to 240*60, the two formats have the same source data
rate.
Q. You didn't mention anything about the audio compression.
A. Oh, right. Well, I don't know as much about the audio compression.
Basically they use very carefully developed psychoacoustic models
derived from experiments with the best obtainable listeners to
pick out pieces of the sound that you can't hear. There are what
are called "masking" effects where, for example, a large component
at one frequency will prevent you from hearing lower energy parts
at nearby frequencies, where the relative energy vs. frequency
that is masked is described by some empirical curve. There are
similar temporal masking effects, as well as some more complicated
interactions where a temporal effect can unmask a frequency, and
vice-versa.
The sound is broken up into spectral chunks with a hybrid scheme
that combines sine transforms with subband transforms, and the
psychoacoustic model written in terms of those chunks. Whatever
can be removed or reduced in precision is, and the remainder is
sent. It's a little more complicated than that, since the bits
have to be allocated across the bands. And, of course, what is
sent is entropy coded.
Q. So how much does it compress?
A. As I mentioned before, audio CD data rates are about 1.5 Mbits/s.
You can compress the same stereo program down to 256 Kbits/s with
no loss in discernable quality. (So they say. For the most part
it's true, but every once in a while a weird thing might happen
that you'll notice. However the effect is very small, and it takes
a listener trained to notice these particular types of effects.)
That's about 6:1 compression. So, a CD MPEG I stream would have
about 1.25 MBits/s left for video. The number I usually see though
is 1.15 MBits/s (maybe you need the rest for the system data
stream). You can then calculate the video compression ratio from
the numbers here to be about 26:1. If you step back and think
about that, it's little short of a miracle. Of course, it's lossy
compression, but it can be pretty hard sometimes to see the loss,
if you're comparing the SIF original to the SIF decompressed. There
is, however, a very noticeable loss if you're coming from CCIR-601
and have to decimate to SIF, but that's another matter. I'm not
counting that in the 26:1.
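(For the curious, the arithmetic behind the 26:1 figure is roughly this:
SIF is 352x240 luminance plus two 176x120 chrominance channels at 8 bits
per sample and 30 frames/s, i.e. (352*240 + 2*176*120) * 8 * 30 = about
30.4 Mbits/s uncompressed, and 30.4 / 1.15 is about 26.)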
The standard also provides for other bit rates ranging from 32Kbits/s
for a single channel, up to 448 Kbits/s for stereo.
Q. What's phase II?
A. As I said, there is a considerable loss of quality in going from
CCIR-601 to SIF resolution. For entertainment video, it's simply
not acceptable. You want to use more bits and code all or almost
all the CCIR-601 data. From subjective testing at the Japan
meeting in November 1991, it seems that 4 MBits/s can give very
good quality compared to the original CCIR-601 material. The
objective of phase II is to define a bit stream optimized for these
resolutions and bit rates.
Q. Why not just scale up what you're doing with MPEG I?
A. The main difficulty is the interlacing. The simplest way to extend
MPEG I to interlaced material is to put the fields together into
frames (720x486x30/s). This results in bad motion artifacts that
stem from the fact that moving objects are in different places
in the two fields, and so don't line up in the frames. Compressing
and decompressing without taking that into account somehow tends to
muddle the objects in the two different fields.
The other thing you might try is to code the even and odd field
streams separately. This avoids the motion artifacts, but as you
might imagine, doesn't get very good compression since you are not
using the redundancy between the even and odd fields where there
is not much motion (which is typically most of the image).
Or you can code it as a single stream of fields. Or you can
interpolate lines. Or, etc. etc. There are many things you can
try, and the point of MPEG II is to figure out what works well.
MPEG II is not limited to consider only derivations of MPEG I.
There were several non-MPEG I-like schemes in the competition in
November, and some aspects of those algorithms may or may not
make it into the final standard for entertainment video compression.
Q. So what works?
A. Basically, derivations of MPEG I worked quite well, with one that
used wavelet subband coding instead of DCT's that also worked very
well. Also among the worked-very-well's was a scheme that did not
use B frames at all, just I and P's. All of them, except maybe one,
did some sort of adaptive frame/field coding, where a decision is
made on a macroblock basis as to whether to code that one as one
frame macroblock or as two field macroblocks. Some other aspects
are how to code I-frames--some suggest predicting the even field
from the odd field. Or you can predict evens from evens and odds
or odds from evens and odds or any field from any other field, etc.
Q. So what works?
A. Ok, we're not really sure what works best yet. The next step is
to define a "test model" to start from, that incorporates most of
the salient features of the worked-very-well proposals in a
simple way. Then experiments will be done on that test model,
making a mod at a time, and seeing what makes it better and what
makes it worse. Example experiments are, B's or no B's, DCT vs.
wavelets, various field prediction modes, etc. The requirements,
such as implementation cost, quality, random access, etc. will all
feed into this process as well.
Q. When will all this be finished?
A. I don't know. I'd have to hope in about a year or less.
Q. How do I join MPEG?
A. You don't join MPEG. You have to participate in ISO as part of a
national delegation. How you get to be part of the national
delegation is up to each nation. I only know the U.S., where you
have to attend the corresponding ANSI meetings to be able to
attend the ISO meetings. Your company or institution has to be
willing to sink some bucks into travel since, naturally, these
meetings are held all over the world. (For example, Paris,
Santa Clara, Kurihama Japan, Singapore, Haifa Israel, Rio de
Janeiro, London, etc.)
Q. Well, then how do I get the documents, like the MPEG I draft?
A. If you aren't part of the process, then you have to try to get
them from your national body, which is ANSI in the U.S. ANSI
won't have any stuff (I don't think) pertaining to MPEG II, but
they should have the MPEG I Committee Draft, since it is now up
for balloting in the U.S. (as well as the other countries). It
has all the nitty gritty details about the systems, video, and
audio data streams and informative annexes about how to really
do it.
------------------------------------------------------------------------------
~Subject: [72] What is wavelet theory?
Preprints and software are available by anonymous ftp from the
Yale Mathematics Department computer ceres.math.yale.edu[130.132.23.22],
in pub/wavelets and pub/software.
epic is a pyramid wavelet coder. (For source code, see question 3 in part one).
Bill Press of Harvard/CfA has made some things available for anonymous
ftp on cfata4.harvard.edu [128.103.40.79] in directory /pub. There is
a short TeX article on wavelet theory (wavelet.tex, to be included in
a future edition of Numerical Recipes), some sample wavelet code
(wavelet.f, in FORTRAN - sigh), and a beta version of an astronomical
image compression program which he is currently developing (FITS
format data files only, in fitspress08.tar.Z).
A 5 minute course in wavelet transforms, by Richard Kirk <rak@crosfield.co.uk>:
Do you know what a Haar transform is? It's a transform to another orthonormal
space (like the DFT), but the basis functions are a set of square wave bursts
like this...
+--+                               +------+
+  |  +------------------          +      |      +--------------
   +--+                                   +------+
      +--+                                       +------+
------+  |  +------------          --------------+      |      +
         +--+                                            +------+
            +--+                   +-------------+
------------+  |  +------          +             |             +
               +--+                              +-------------+
                  +--+             +---------------------------+
------------------+  |  +          +                           +
                     +--+
This is the set of functions for an 8-element 1-D Haar transform. You
can probably see how to extend this to higher orders and higher dimensions
yourself. This is dead easy to calculate, but it is not what is usually
understood by a wavelet transform.
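"Dead easy" really does mean easy: one way to compute an (unnormalized)
8-element Haar transform is just repeated pairwise averaging and
differencing, as in the C sketch below. The output ends up as the overall
average followed by one, two and four detail coefficients of increasingly
fine scale, matching the grouping of basis functions described in the next
paragraph.
void haar8(double x[8])
{
    double tmp[8];
    int len, i;

    for (len = 8; len >= 2; len /= 2) {
        for (i = 0; i < len / 2; i++) {
            tmp[i]         = (x[2*i] + x[2*i + 1]) / 2;   /* local average */
            tmp[len/2 + i] = (x[2*i] - x[2*i + 1]) / 2;   /* local detail  */
        }
        for (i = 0; i < len; i++)
            x[i] = tmp[i];
    }
}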
If you look at the eight Haar functions you see we have four functions
that code the highest resolution detail, two functions that code the
coarser detail, one function that codes still coarser detail, and the
top function that codes the average value for the whole `image'.
Haar functions can be used to code images instead of the DFT. With bilevel
images (such as text) the result can look better, and it is quicker to code.
Flattish regions, textures, and soft edges in scanned images get a nasty
`blocking' feel to them. This is obvious on hardcopy, but can be disguised on
color CRTs by the effects of the shadow mask. The DCT gives more consistent
results.
This connects up with another bit of maths sometimes called Multispectral
Image Analysis, sometimes called Image Pyramids.
Suppose you want to produce a discretely sampled image from a continuous
function. You would do this by effectively `scanning' the function using a
sinc function [ sin(x)/x ] `aperture'. This was proved by Shannon in the
`forties. You can do the same thing starting with a high resolution
discretely sampled image. You can then get a whole set of images showing
the edges at different resolutions by differencing the image at one
resolution with another version at another resolution. If you have made this
set of images properly they ought to all add together to give the original
image.
This is an expansion of data. Suppose you started off with a 1K*1K image.
You may now have a 64*64 low resolution image plus difference images at
128*128, 256*256, 512*512 and 1K*1K.
Where has this extra data come from? If you look at the difference images you
will see there is obviously some redundancy as most of the values are near
zero. From the way we constructed the levels we know that locally the average
must approach zero in all levels but the top. We could then construct a set of
functions out of the sinc functions at any level so that their total value
at all higher levels is zero. This gives us an orthonormal set of basis
functions for a transform. The transform resembles the Haar transform a bit,
but has symmetric wave pulses that decay away continuously in either direction
rather than square waves that cut off sharply. This transform is the
wavelet transform ( got to the point at last!! ).
These wavelet functions have been likened to the edge detecting functions
believed to be present in the human retina.
The basis functions of the wavelet transform are harder to calculate, and there
is at present no fast wavelet transform to rival the fast DCT algorithms. Or
if there is I would like to know about it.
For image compression we usually use an 8*8 DCT rather than transform the whole
image. You could do an 8*8 wavelet transform. This could run as fast as the 8*8
DCT on much current hardware, but what would this give you? The results
look much the same as the FFT.
------------------------------------------------------------------------------
~Subject: [73] What is the theoretical compression limit?
There is no compressor that is guaranteed to compress all possible input
files. If it compresses some files, then it must enlarge some others.
This can be proven by a simple counting argument (see question 9): there
are 2^N possible files of N bits but only 2^N - 1 files shorter than N
bits, so no invertible scheme can shrink every N-bit file.
As an extreme example, the following algorithm achieves 100%
compression for one special input file and enlarges all other files by
only one bit:
- if the input data is <insert your favorite one here>, output an empty file.
- otherwise output one bit (zero or one) followed by the input data.
The concept of theoretical compression limit is meaningful only
if you have a model for your input data. See question 70 above
for some examples of data models.
------------------------------------------------------------------------------
~Subject: [74] Introduction to JBIG
Written by Mark Adler <madler@cco.caltech.edu>.
JBIG losslessly compresses binary (one-bit/pixel) images. (The B stands
for bi-level.) Basically it models the redundancy in the image as the
correlations of the pixel currently being coded with a set of nearby
pixels called the template. An example template might be the two
pixels preceding this one on the same line, and the five pixels centered
above this pixel on the previous line. Note that this choice only
involves pixels that have already been seen from a scanner.
The current pixel is then arithmetically coded based on the eight-bit
(including the pixel being coded) state so formed. So there are (in this
case) 256 contexts to be coded. The arithmetic coder and probability
estimator for the contexts are actually IBM's (patented) Q-coder. The
Q-coder uses low precision, rapidly adaptable (those two are related)
probability estimation combined with a multiply-less arithmetic coder.
The probability estimation is intimately tied to the interval calculations
necessary for the arithmetic coding.
JBIG actually goes beyond this and has adaptive templates, and probably
some other bells and whistles I don't know about. You can find a
description of the Q-coder as well as the ancestor of JBIG in the Nov 88
issue of the IBM Journal of Research and Development. This is a very
complete and well written set of five articles that describe the Q-coder
and a bi-level image coder that uses the Q-coder.
You can use JBIG on grey-scale or even color images by simply applying
the algorithm one bit-plane at a time. You would want to recode the
grey or color levels first though, so that adjacent levels differ in
only one bit (called Gray-coding). I hear that this works well up to
about six bits per pixel, beyond which JPEG's lossless mode works better.
You need to use the Q-coder with JPEG also to get this performance.
Actually no lossless mode works well beyond six bits per pixel, since
those low bits tend to be noise, which doesn't compress at all.
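The Gray-coding step mentioned above is a standard bit trick; a sketch in
C (nothing JBIG-specific about it):
/* binary -> Gray: adjacent values differ in exactly one bit */
unsigned bin_to_gray(unsigned b)
{
    return b ^ (b >> 1);
}

/* Gray -> binary: undo the xor by folding the bits back down */
unsigned gray_to_bin(unsigned g)
{
    unsigned b = 0;
    while (g) {
        b ^= g;
        g >>= 1;
    }
    return b;
}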
Anyway, the intent of JBIG is to replace the current, less effective
group 3 and 4 fax algorithms.
------------------------------------------------------------------------------
~Subject: [99] Acknowledgments
There are too many people to cite. Thanks to all people who directly
or indirectly contributed to this FAQ.